A proper parametrization of the state-transition matrices of linear state-space models (SSMs), followed by standard nonlinearities, enables them to efficiently learn representations from sequential data. In this paper, we show that we can improve further when the structured SSM, such as S4, is given by a linear liquid time-constant (LTC) state-space model. LTC neural networks are causal continuous-time neural networks with an input-dependent state-transition module, which enables them to learn to adapt to incoming inputs at inference time. We show that by using the diagonal and diagonal-plus-low-rank decompositions of the state-transition matrix introduced in S4, together with a few simplifications, the LTC-based structured state-space model, dubbed Liquid-S4, achieves new state-of-the-art generalization across sequence-modeling tasks with long-term dependencies such as image, text, audio, and medical time series, with an average performance of 87.32% on the Long-Range Arena benchmark. On the full raw Speech Commands recognition dataset, Liquid-S4 achieves 96.78% accuracy with a 30% reduction in parameter count compared to S4. The additional gain in performance is a direct result of the Liquid-S4 kernel structure, which takes into account the similarities of the input-sequence samples during training and inference.
translated by Google Translate
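As a rough illustration of the idea (not the paper's implementation), a liquid, i.e., input-dependent, state-space recurrence can be sketched as a discretized linear SSM whose transition matrix is modulated by the current input. The toy dynamics and all names below are our own assumptions:

```python
import numpy as np

def liquid_ssm(u, A, B, C, W):
    """Toy input-dependent (liquid) linear SSM recurrence.

    x_{k+1} = (A + u_k * W) x_k + B u_k   # transition depends on the input
    y_k     = C x_k
    A, W: (n, n); B, C: (n,); u: (T,) scalar input sequence.
    """
    x = np.zeros(A.shape[0])
    ys = []
    for u_k in u:
        ys.append(C @ x)                  # read out before the state update
        x = (A + u_k * W) @ x + B * u_k   # liquid step: transition modulated by u_k
    return np.array(ys)

rng = np.random.default_rng(0)
n, T = 4, 16
A = 0.9 * np.eye(n)                        # stable diagonal base transition (S4-style diagonal part)
W = 0.01 * rng.standard_normal((n, n))     # input-dependent modulation (the "liquid" part)
B = rng.standard_normal(n)
C = rng.standard_normal(n)
u = rng.standard_normal(T)
y = liquid_ssm(u, A, B, C, W)
print(y.shape)  # (16,)
```

A plain (non-liquid) SSM is recovered by setting `W` to zero; the input-dependent term is what makes the convolution kernel depend on input-sample similarities.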
Multi-sensor fusion is essential for an accurate and reliable autonomous driving system. Recent approaches are based on point-level fusion: augmenting the LiDAR point cloud with camera features. However, the camera-to-LiDAR projection throws away the semantic density of camera features, hindering the effectiveness of such methods, especially for semantic-oriented tasks such as 3D scene segmentation. In this paper, we break this deeply-rooted convention with BEVFusion, an efficient and generic multi-task multi-sensor fusion framework. It unifies multi-modal features in a shared bird's-eye-view (BEV) representation space, which nicely preserves both geometric and semantic information. To achieve this, we diagnose and lift the key efficiency bottleneck in view transformation with optimized BEV pooling, reducing latency by more than 40x. BEVFusion is fundamentally task-agnostic and seamlessly supports different 3D perception tasks with almost no architectural changes. It establishes the new state of the art on nuScenes, achieving 1.3% higher mAP and NDS on 3D object detection and 13.6% higher mIoU on BEV map segmentation, at 1.9x lower computation cost. Code to reproduce our results is available at https://github.com/mit-han-lab/bevfusion.
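The basic operation that BEV pooling accelerates is scattering per-point features into a bird's-eye-view grid. The minimal sketch below shows that scatter-add step only, not BEVFusion's optimized kernel; the function name and grid parameters are our own:

```python
import numpy as np

def bev_pool(points_xy, feats, grid_min, cell_size, grid_shape):
    """Sum point features into a bird's-eye-view (BEV) grid.

    points_xy: (N, 2) x/y coordinates; feats: (N, C) per-point features.
    Returns an (H, W, C) BEV feature map; points outside the grid are dropped.
    """
    idx = np.floor((points_xy - grid_min) / cell_size).astype(int)
    H, W = grid_shape
    keep = (idx[:, 0] >= 0) & (idx[:, 0] < H) & (idx[:, 1] >= 0) & (idx[:, 1] < W)
    idx, feats = idx[keep], feats[keep]
    bev = np.zeros((H, W, feats.shape[1]))
    np.add.at(bev, (idx[:, 0], idx[:, 1]), feats)  # scatter-add features into cells
    return bev

pts = np.array([[0.5, 0.5], [0.6, 0.4], [3.2, 1.1], [-1.0, 0.0]])
fts = np.ones((4, 2))
bev = bev_pool(pts, fts, grid_min=np.array([0.0, 0.0]), cell_size=1.0, grid_shape=(4, 4))
print(bev[0, 0])  # first two points land in cell (0, 0) -> [2. 2.]
```

The naive version above recomputes cell indices and touches memory irregularly on every call; precomputing and caching those indices is the kind of optimization that makes the real view transformation fast.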
Data-driven simulators promise high data efficiency for driving-policy learning. This data efficiency becomes a bottleneck when such simulators are used to model interactions: small underlying datasets often lack the interesting and challenging edge cases needed for learning interactive driving. We address this challenge by proposing a simulation method that uses in-painted ado (surrounding) vehicles for learning robust driving policies. Our approach can thus be used to learn policies that involve multi-agent interactions, and it allows training via state-of-the-art policy-learning methods. We evaluate the method on learning standard interaction scenarios in driving. In extensive experiments, we show that the resulting policies can be transferred directly to a full-scale autonomous vehicle without the use of any traditional sim-to-real transfer techniques such as domain randomization.
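To make the idea of synthesizing interaction edge cases concrete, here is a toy stand-in for ado-vehicle insertion (not the paper's in-painting pipeline): an ado trajectory is derived from a logged ego trajectory by a lateral offset and a time shift, turning a passive replay into an interactive encounter. All quantities are our own illustrative choices:

```python
import numpy as np

def insert_ado(ego_traj, offset, time_shift):
    """Create a synthetic ado-vehicle trajectory from a logged ego trajectory.

    The ado follows the logged path, laterally offset and time-shifted; this is
    a toy proxy for in-painting an ado vehicle into a replayed scene.
    ego_traj: (T, 2) positions.
    """
    ado = np.roll(ego_traj, time_shift, axis=0).astype(float)
    ado[:, 1] += offset  # lateral offset into the adjacent lane
    return ado

def min_gap(a, b):
    """Smallest same-timestep distance between two (T, 2) trajectories."""
    return float(np.min(np.linalg.norm(a - b, axis=1)))

ego = np.stack([np.linspace(0, 50, 51), np.zeros(51)], axis=1)  # straight logged drive
ado = insert_ado(ego, offset=3.0, time_shift=5)
print(min_gap(ego, ado))
```

Sweeping `offset` and `time_shift` would generate a family of near-miss scenarios of varying difficulty from a single log.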
Simulation has the potential to transform the development of robust algorithms for mobile agents deployed in safety-critical scenarios. However, the poor photorealism and lack of diverse sensor modalities in existing simulation engines remain key obstacles to realizing this potential. Here, we present VISTA, an open-source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles. Using high-fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras, enabling the rapid generation of novel viewpoints in simulation and thereby enriching the data available for policy learning with corner cases that are difficult to capture in the physical world. Using VISTA, we demonstrate the ability to train and test perception-to-control policies across each of these sensor types, and showcase the power of this approach via deployment on a full-scale autonomous vehicle. The policies learned in VISTA exhibit sim-to-real transfer without modification and greater robustness than those trained exclusively on real-world data.
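Of the three sensor types, the event camera is the least familiar; the standard contrast-threshold model (fire a +1/-1 event when log intensity changes by more than a threshold) can be sketched from two frames as below. This is the generic event-camera model, not VISTA's simulation pipeline, and the names and threshold are our own:

```python
import numpy as np

def frames_to_events(frame0, frame1, threshold=0.2, eps=1e-6):
    """Generate per-pixel event polarities from two intensity frames.

    An event camera fires +1/-1 when the log intensity at a pixel changes by
    more than a contrast threshold; pixels below threshold emit no event (0).
    """
    dlog = np.log(frame1 + eps) - np.log(frame0 + eps)
    events = np.zeros_like(dlog, dtype=int)
    events[dlog > threshold] = 1     # brightness increased
    events[dlog < -threshold] = -1   # brightness decreased
    return events

f0 = np.array([[0.5, 0.5],
               [0.5, 0.5]])
f1 = np.array([[1.0, 0.5],
               [0.2, 0.5]])
ev = frames_to_events(f0, f1)
print(ev)  # [[ 1  0]
           #  [-1  0]]
```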
Continuous-depth learning architectures enable learning of flexible probabilistic models, both for predictive modeling as neural ordinary differential equations (ODEs) and for generative modeling as continuous normalizing flows. In this work, we design a framework to decipher the internal dynamics of these continuous-depth models by pruning their network architectures. Our empirical results suggest that pruning improves generalization for neural ODEs in generative modeling. We show empirically that the improvement arises because pruning helps avoid mode collapse and flattens the loss surface. Moreover, pruning finds efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy. We hope our results will invigorate further research into the performance-size trade-offs of modern continuous-depth models.
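The core pruning step behind such sparsity results is simple magnitude pruning: zero out the smallest-magnitude fraction of the weights. A minimal sketch of that step, with illustrative names and sizes of our own (the paper's pruning schedule and criteria may differ):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the entries."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of entries to remove
    if k == 0:
        return weights.copy()
    thresh = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(weights) <= thresh, 0.0, weights)

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 8))
Wp = magnitude_prune(W, sparsity=0.98)
print(np.mean(Wp == 0.0))  # fraction of zeroed weights
```

In practice this is applied iteratively (prune a little, fine-tune, repeat) rather than in one shot, which is what makes 98% sparsity reachable without accuracy loss.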
In the past years, deep learning has seen an increase of usage in the domain of histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and be able to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole-Slide-Images under domain shift using the H\&E stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work that compares the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, Test-Time Data Augmentation as well as combinations thereof. We observe that ensembles of methods generally lead to higher accuracies and better calibration and that Test-Time Data Augmentation can be a promising alternative when choosing an appropriate set of augmentations. Across methods, a rejection of the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution as well as out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
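The rejection mechanism described above — score each input by the uncertainty of an ensemble's averaged prediction, then drop the most uncertain fraction — can be sketched generically as follows (predictive entropy as the uncertainty score is one common choice; names and the toy ensemble are our own):

```python
import numpy as np

def predictive_entropy(member_probs):
    """Entropy of the mean predictive distribution across ensemble members.

    member_probs: (M, N, K) softmax outputs of M members for N inputs, K classes.
    """
    mean = member_probs.mean(axis=0)
    return -np.sum(mean * np.log(mean + 1e-12), axis=1)

def reject_most_uncertain(entropy, frac):
    """Boolean mask keeping the (1 - frac) lowest-entropy (most confident) inputs."""
    cutoff = np.quantile(entropy, 1.0 - frac)
    return entropy <= cutoff

# Toy 2-member ensemble: members agree on input 0, disagree on input 1.
probs = np.array([
    [[0.95, 0.05], [0.9, 0.1]],
    [[0.90, 0.10], [0.1, 0.9]],
])
H = predictive_entropy(probs)
keep = reject_most_uncertain(H, frac=0.5)
print(keep)  # the agreed-upon input is kept, the disputed one rejected
```

The same two functions apply unchanged to MC-Dropout samples or test-time augmentations: only the source of the M predictive distributions changes.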
Charisma is considered as one's ability to attract and potentially also influence others. Clearly, there can be considerable interest from an artificial intelligence's (AI) perspective to provide it with such skill. Beyond, a plethora of use cases opens up for computational measurement of human charisma, such as for tutoring humans in the acquisition of charisma, mediating human-to-human conversation, or identifying charismatic individuals in big social data. A number of models exist that base charisma on various dimensions, often following the idea that charisma is given if someone could and would help others. Examples include influence (could help) and affability (would help) in scientific studies or power (could help), presence, and warmth (both would help) as a popular concept. Modelling high levels in these dimensions for humanoid robots or virtual agents seems accomplishable. Beyond, also automatic measurement appears quite feasible with the recent advances in the related fields of Affective Computing and Social Signal Processing. Here, we therefore present a blueprint for building machines that can appear charismatic, but also analyse the charisma of others. To this end, we first provide the psychological perspective including different models of charisma and behavioural cues of it. We then switch to conversational charisma in spoken language as an exemplary modality that is essential for human-human and human-computer conversations. The computational perspective then deals with the recognition and generation of charismatic behaviour by AI. This includes an overview of the state of play in the field and the aforementioned blueprint. We then name exemplary use cases of computational charismatic skills before switching to ethical aspects and concluding this overview and perspective on building charisma-enabled AI.
Deep learning-based 3D human pose estimation performs best when trained on large amounts of labeled data, making combined learning from many datasets an important research direction. One obstacle to this endeavor are the different skeleton formats provided by different datasets, i.e., they do not label the same set of anatomical landmarks. There is little prior research on how to best supervise one model with such discrepant labels. We show that simply using separate output heads for different skeletons results in inconsistent depth estimates and insufficient information sharing across skeletons. As a remedy, we propose a novel affine-combining autoencoder (ACAE) method to perform dimensionality reduction on the number of landmarks. The discovered latent 3D points capture the redundancy among skeletons, enabling enhanced information sharing when used for consistency regularization. Our approach scales to an extreme multi-dataset regime, where we use 28 3D human pose datasets to supervise one model, which outperforms prior work on a range of benchmarks, including the challenging 3D Poses in the Wild (3DPW) dataset. Our code and models are available for research purposes.
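The key property of affine-combining latent keypoints — each latent point is a weighted combination of landmarks with weights summing to one — is that the mapping commutes with translation, so latent points stay in the skeleton's own frame. A minimal sketch verifying this (for simplicity we normalize to convex weights, a special case of affine; sizes and names are our own):

```python
import numpy as np

def affine_combine(weights, points):
    """Map J 3D points to L latent points as affine combinations.

    weights: (L, J) with each row summing to 1; points: (J, 3).
    Rows summing to one make the map equivariant under translation.
    """
    assert np.allclose(weights.sum(axis=1), 1.0)
    return weights @ points  # (L, J) @ (J, 3) -> (L, 3)

rng = np.random.default_rng(2)
J, L = 6, 3
W = rng.random((L, J))
W /= W.sum(axis=1, keepdims=True)   # normalize rows (convex special case of affine)

joints = rng.standard_normal((J, 3))
t = np.array([1.0, -2.0, 0.5])
latent = affine_combine(W, joints)
latent_shifted = affine_combine(W, joints + t)
print(np.allclose(latent_shifted, latent + t))  # True: translation equivariant
```

In the autoencoder, one such weight matrix encodes landmarks to latent points and a second decodes them back; the reconstruction loss is what forces the latent points to capture the redundancy shared across skeleton formats.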
This article concerns Bayesian inference using deep linear networks with output dimension one. In the interpolating (zero noise) regime we show that with Gaussian weight priors and MSE negative log-likelihood loss both the predictive posterior and the Bayesian model evidence can be written in closed form in terms of a class of meromorphic special functions called Meijer-G functions. These results are non-asymptotic and hold for any training dataset, network depth, and hidden layer widths, giving exact solutions to Bayesian interpolation using a deep Gaussian process with a Euclidean covariance at each layer. Through novel asymptotic expansions of Meijer-G functions, a rich new picture of the role of depth emerges. Specifically, we find that the posteriors in deep linear networks with data-independent priors are the same as in shallow networks with evidence maximizing data-dependent priors. In this sense, deep linear networks make provably optimal predictions. We also prove that, starting from data-agnostic priors, Bayesian model evidence in wide networks is only maximized at infinite depth. This gives a principled reason to prefer deeper networks (at least in the linear case). Finally, our results show that with data-agnostic priors a novel notion of effective depth given by \[\#\text{hidden layers}\times\frac{\#\text{training data}}{\text{network width}}\] determines the Bayesian posterior in wide linear networks, giving rigorous new scaling laws for generalization error.
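For concreteness, plugging illustrative numbers of our own choosing into the effective-depth expression above: \[\#\text{hidden layers}\times\frac{\#\text{training data}}{\text{network width}} = 5\times\frac{1000}{500} = 10,\] so a network with 5 hidden layers and width 500 trained on 1000 points has the same effective depth, and hence the same wide-limit Bayesian posterior under data-agnostic priors, as one with 10 hidden layers and width 1000 trained on the same data.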
In this paper we study the smooth strongly convex minimization problem $\min_{x}\min_y f(x,y)$. The existing optimal first-order methods require $\mathcal{O}(\sqrt{\max\{\kappa_x,\kappa_y\}} \log 1/\epsilon)$ of computations of both $\nabla_x f(x,y)$ and $\nabla_y f(x,y)$, where $\kappa_x$ and $\kappa_y$ are condition numbers with respect to variable blocks $x$ and $y$. We propose a new algorithm that only requires $\mathcal{O}(\sqrt{\kappa_x} \log 1/\epsilon)$ of computations of $\nabla_x f(x,y)$ and $\mathcal{O}(\sqrt{\kappa_y} \log 1/\epsilon)$ computations of $\nabla_y f(x,y)$. In some applications $\kappa_x \gg \kappa_y$, and computation of $\nabla_y f(x,y)$ is significantly cheaper than computation of $\nabla_x f(x,y)$. In this case, our algorithm substantially outperforms the existing state-of-the-art methods.
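As a toy illustration of the setting (emphatically not the paper's accelerated algorithm), plain gradient descent on a separable quadratic already shows why counting the two gradient oracles separately matters: the number of $\nabla_x$ evaluations scales with $\kappa_x$ and the number of $\nabla_y$ evaluations with $\kappa_y$, not with $\max\{\kappa_x,\kappa_y\}$; acceleration sharpens each dependence to its square root. All quantities below are our own:

```python
import numpy as np

def gd_iters(hessian_diag, tol=1e-6):
    """Iterations of gradient descent on 0.5 * z^T H z (H diagonal) to reach
    gradient norm <= tol, using the classical step size 2 / (L + mu)."""
    L, mu = hessian_diag.max(), hessian_diag.min()
    lr = 2.0 / (L + mu)
    z = np.ones_like(hessian_diag)
    n = 0
    while np.linalg.norm(hessian_diag * z) > tol:
        z -= lr * hessian_diag * z  # one gradient evaluation for this block
        n += 1
    return n

# Separable f(x, y) = 0.5 x^T Hx x + 0.5 y^T Hy y: each block converges at a
# rate set by its own condition number, so grad_y is needed far fewer times.
nx = gd_iters(np.array([1.0, 100.0]))   # kappa_x = 100
ny = gd_iters(np.array([1.0, 4.0]))     # kappa_y = 4
print(nx, ny)                            # nx is an order of magnitude larger
```

When each $\nabla_y$ evaluation is also much cheaper than a $\nabla_x$ evaluation, exploiting this asymmetry, as the proposed algorithm does with square-root dependence, dominates methods whose oracle counts are both governed by $\max\{\kappa_x,\kappa_y\}$.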